Transfer a large proportion of parameters and only update a few: [1] updates only the scale and shift parameters of the normalization layers (see the first sketch after this list). [2] introduces a miner network in front of the generator and updates it. [3] uses Fisher information to select the parameters to be updated; empirically, the last layers tend to be frozen. Similarly, [6] freezes the last layers and learns scale/shift parameters.
Transfer structural similarity from a large dataset to a small dataset: [4] enforces consistency of pairwise distances between generated samples across the source and adapted generators (see the second sketch after this list).
Transfer parameter basis: [5] adapts the singular values of the pre-trained weights while freezing the corresponding singular vectors (see the third sketch after this list).
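A minimal PyTorch sketch of the scale-and-shift idea in [1]: freeze every pre-trained generator weight and train only the per-channel scale (gamma) and shift (beta) parameters of the normalization layers. The generator itself is assumed to be any BatchNorm/InstanceNorm-based model; `collect_scale_shift_params` is a hypothetical helper name.

```python
import torch
import torch.nn as nn

def collect_scale_shift_params(generator: nn.Module):
    """Freeze all parameters, then re-enable only normalization scale/shift."""
    for p in generator.parameters():
        p.requires_grad = False
    trainable = []
    for m in generator.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.InstanceNorm2d)):
            for p in (m.weight, m.bias):
                if p is not None:          # InstanceNorm may be non-affine
                    p.requires_grad = True
                    trainable.append(p)
    return trainable

# Usage: only the scale/shift parameters receive gradient updates.
# optimizer = torch.optim.Adam(collect_scale_shift_params(generator), lr=1e-4)
```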
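A hedged sketch of the structure-consistency idea in [4]: relative pairwise similarities between samples produced by the adapted generator are pushed toward those of the frozen source generator for the same batch of latents. The choice of cosine similarity over flat features and a single KL term is a simplification of the paper's multi-layer loss.

```python
import torch
import torch.nn.functional as F

def distance_consistency_loss(source_feats, target_feats):
    """KL divergence between softmax-normalized pairwise-similarity rows.

    source_feats, target_feats: (N, D) features for the same latents from
    the frozen source generator and the adapted generator, respectively.
    """
    def similarity_rows(feats):
        # Cosine similarity between every pair, excluding self-similarity.
        sim = F.cosine_similarity(feats.unsqueeze(0), feats.unsqueeze(1), dim=-1)
        n = sim.size(0)
        mask = ~torch.eye(n, dtype=torch.bool, device=sim.device)
        return sim[mask].view(n, n - 1)

    p = F.softmax(similarity_rows(source_feats), dim=-1)        # reference structure
    log_q = F.log_softmax(similarity_rows(target_feats), dim=-1)
    return F.kl_div(log_q, p, reduction="batchmean")
```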
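A minimal sketch of the singular-value adaptation in [5]: each pre-trained weight is factored as W = U diag(s) Vᵀ, U and V are stored as frozen buffers, and only the singular values s are optimized. Convolutional weights would first be reshaped to 2-D; this linear-layer version is a simplification, and `SVDLinear` is a hypothetical module name.

```python
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    def __init__(self, pretrained_weight: torch.Tensor, bias: torch.Tensor = None):
        super().__init__()
        U, s, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        self.register_buffer("U", U)     # frozen left singular vectors
        self.register_buffer("Vh", Vh)   # frozen right singular vectors
        self.s = nn.Parameter(s)         # trainable singular values
        self.bias = None if bias is None else nn.Parameter(bias, requires_grad=False)

    def forward(self, x):
        # Reconstruct the weight from the frozen basis and adapted spectrum.
        W = self.U @ torch.diag(self.s) @ self.Vh
        return nn.functional.linear(x, W, self.bias)
```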
References
[1] Atsuhiro Noguchi, Tatsuya Harada: “Image generation from small datasets via batch statistics adaptation.” ICCV (2019).
[2] Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, Joost van de Weijer: “MineGAN: effective knowledge transfer from GANs to target domains with few images.” CVPR (2020).
[3] Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman: “Few-shot Image Generation with Elastic Weight Consolidation.” NeurIPS (2020).
[4] Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang: “Few-shot Image Generation via Cross-domain Correspondence.” CVPR (2021).
[5] Esther Robb, Wen-Sheng Chu, Abhishek Kumar, Jia-Bin Huang: “Few-Shot Adaptation of Generative Adversarial Networks.” arXiv (2020).
[6] Miaoyun Zhao, Yulai Cong, Lawrence Carin: “On Leveraging Pretrained GANs for Generation with Limited Data.” ICML (2020).